In January, we gave you a first-hand glimpse of what our AI technology can do. In February, we launched our AI writing resource page to make it easier for you to track our progress and to give you access to pedagogical resources, blog posts focused on AI writing, media coverage, and much more.
We’ve come a long way since then. Continuing the theme of sharing updates on how our AI writing detection technology is performing in our AI Innovation Lab, we’d like to offer some insight into what constitutes a false positive and how our model handles them.
But first, let’s understand what a false positive in AI writing detection means. A false positive refers to incorrectly identifying fully human-written text as AI-generated.
First, it’s important to emphasize that Turnitin’s AI writing detection focuses on accuracy: if we say there’s AI writing, we’re very sure there is. Our efforts have focused primarily on maintaining a high accuracy rate alongside a false positive rate of less than 1%, so that students are not falsely accused of misconduct.
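For readers who want a concrete sense of what that metric means, here is a minimal, purely illustrative sketch of how a false positive rate can be computed from labeled evaluation data. The sample labels, predictions, and the false_positive_rate helper are assumptions for illustration only, not Turnitin’s actual evaluation pipeline.

```python
# Illustrative sketch only: computing a false positive rate from labeled data.
# A false positive = a genuinely human-written document flagged as AI-generated.

def false_positive_rate(predictions, labels):
    """FPR = human-written documents flagged as AI / all human-written documents."""
    false_positives = sum(
        1 for pred, label in zip(predictions, labels)
        if pred == "ai" and label == "human"
    )
    human_total = sum(1 for label in labels if label == "human")
    return false_positives / human_total if human_total else 0.0

# Hypothetical evaluation set: 200 human-written essays and 50 AI-generated ones.
# One human essay is incorrectly flagged as AI, giving a 0.5% false positive rate,
# which would sit below the 1% threshold mentioned above.
labels      = ["human"] * 200 + ["ai"] * 50
predictions = ["ai"] + ["human"] * 199 + ["ai"] * 50
print(f"{false_positive_rate(predictions, labels):.1%}")  # -> 0.5%
```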
However, we want to acknowledge that there is still a small risk of false positives. Watch this short video where David Adamson, an AI scientist at Turnitin and a former high school teacher, explains more about false positives and where we might get it wrong.
We’d like to emphasize that Turnitin does not make determinations of misconduct, even for text similarity; rather, we provide data so that educators can make an informed decision based on their academic and institutional policies. The same is true for our AI writing detection: because our false positive rate is not zero, you as the instructor will need to apply your professional judgment, your knowledge of your students, and the specific context surrounding the assignment.
Making that decision should include alignment with institutional policies, the expectations you have set for your course or assignment, and an understanding of exactly what you are seeking to evaluate through the assignment.
We know that discussions about academic integrity can be challenging in general, and the sudden introduction of new variables like AI writing tools and the possibility of false positives won’t make them any easier. To support you in these situations, we want to offer some general tips, and we’d ask you to return to our blog in the coming weeks for a deeper dive into how instructors and students can approach these conversations.
- Know before you go—consider the possibility of a false positive upfront and have a plan for how you will determine the outcome. Even better, communicate that plan to students so that you have a shared set of expectations.
- Assume positive intent—in a space with so much that is new and unknown, give students the benefit of the doubt. If the evidence is unclear, assume they have acted with integrity.
- Be open and honest—acknowledge upfront that false positives may occur, so that both the instructor and the student are prepared for an open and honest dialogue. If you don’t acknowledge that possibility, the conversation is likely to become far more defensive and confrontational, which could ultimately damage relationships with students.
Need tips on how you as an educator can proactively respond to AI in your classroom? Read our blog post on ways to prepare writing assignments in the age of AI.